Premise Selection for Theorem Proving by Deep Graph Embedding
We propose a deep learning-based approach to the problem of premise selection: selecting mathematical statements relevant for proving a given conjecture. We represent a higher-order logic formula as a graph that is invariant to variable renaming but still fully preserves syntactic and semantic information. We then embed the graph into a vector via a novel embedding method that preserves the information of edge ordering. Our approach achieves state-of-the-art results on the HolStep dataset, improving the classification accuracy from 83% to 90.3%.
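The core representational idea — a graph that identifies all occurrences of a variable while discarding its name — can be sketched as follows. This is a minimal illustration, not the authors' implementation: formulas are assumed to be nested tuples like `('f', '?x', ('g', '?x'))`, with variables written as strings starting with `?`, and edges recorded as `(parent, child, argument_position)` triples so that edge ordering is preserved.

```python
def formula_to_graph(term, nodes=None, edges=None, var_map=None):
    """Turn a formula tree into a graph invariant to variable renaming.

    term: ('f', arg1, arg2, ...) or a variable string starting with '?'.
    Returns (root_index, nodes, edges): nodes is a list of labels, edges is
    a list of (parent_idx, child_idx, arg_position) triples whose positions
    preserve edge ordering.
    """
    if nodes is None:
        nodes, edges, var_map = [], [], {}
    if isinstance(term, str):           # a variable leaf such as '?x'
        if term not in var_map:         # every occurrence shares one node
            var_map[term] = len(nodes)
            nodes.append('VAR')         # name dropped: renaming-invariant
        return var_map[term], nodes, edges
    idx = len(nodes)
    nodes.append(term[0])               # function/constant symbols are kept
    for pos, arg in enumerate(term[1:]):
        child, _, _ = formula_to_graph(arg, nodes, edges, var_map)
        edges.append((idx, child, pos))  # pos records argument order
    return idx, nodes, edges

# f(x, g(x)) and f(y, g(y)) yield identical graphs:
_, n1, e1 = formula_to_graph(('f', '?x', ('g', '?x')))
_, n2, e2 = formula_to_graph(('f', '?y', ('g', '?y')))
assert (n1, e1) == (n2, e2)
```

Because both occurrences of `?x` map to the same `VAR` node, renaming the variable changes nothing, while the `arg_position` field distinguishes `f(x, y)` from `f(y, x)` — the edge-ordering information the embedding method is designed to retain.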
Reviews: Premise Selection for Theorem Proving by Deep Graph Embedding
The paper addresses the problem of premise selection for theorem proving. The objective in this line of work is to learn a vector representation for formulas, which is then used as the input to a downstream machine learning model performing premise selection. The contributions of the paper are: (1) a representation of formulas as graphs; (2) a method for learning vector representations for the nodes and ultimately for the formula graphs; and (3) an empirical demonstration that this representation is superior to the state of the art on theorem-proving data sets.

Pros
* The paper is very well written.
Wang, Mingzhe, Tang, Yihe, Wang, Jian, Deng, Jia
Papers published at the Neural Information Processing Systems Conference.